COMP3161/9164 Concepts of Programming Languages
Term 3, 2024

Tute - Completed (Week 9)

Table of Contents

1. Type Inference

1.1. Examples

What is the most general type of the following implicitly-typed MinHs expressions?

  1. recfun f x = ((fst x) + 1)
  2. InL (InR True)
  3. recfun f x = if (fst x) then (snd x) else f x
  4. recfun f x = if (fst x) then (InL x) else (InR x)

What would their explicitly-typed equivalents be?

  1. \(\forall a.\ \texttt{Int} \times a \rightarrow \texttt{Int}\)
  2. \(\forall a.\ \forall b.\ (b + \texttt{Bool}) + a\)
  3. \(\forall b.\ (\texttt{Bool} \times b) \rightarrow b\)
  4. \(\forall b.\ (\texttt{Bool} \times b) \rightarrow ((\texttt{Bool} \times b) + (\texttt{Bool} \times b))\)

The explicitly typed versions are:

  1. type a. recfun f :: (Int * a -> Int) x = ((fst@Int@a x) + 1)
  2. type a. type b. InL@(b + Bool)@a (InR@b@Bool True)
  3. type b. recfun f :: ((Bool * b) -> b) x = if (fst@Bool@b x) then (snd@Bool@b x) else f x
  4. type b. recfun f :: ((Bool * b) -> ((Bool * b) + (Bool * b))) x = if (fst@Bool@b x) then (InL@(Bool * b)@(Bool * b) x) else (InR@(Bool * b)@(Bool * b) x)
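These terms can be transliterated into Haskell (using pairs for products and Either for sums; the names ex1, ex2, ex3, ex4 are ours, not MinHs syntax), and GHC infers exactly the most general types listed above:

```haskell
-- Haskell analogues of the four MinHs expressions.

-- 1. recfun f x = ((fst x) + 1)  :  forall a. Int * a -> Int
ex1 :: (Int, a) -> Int
ex1 x = fst x + 1

-- 2. InL (InR True)  :  forall a b. (b + Bool) + a
ex2 :: Either (Either b Bool) a
ex2 = Left (Right True)

-- 3. recfun f x = if (fst x) then (snd x) else f x  :  forall b. (Bool * b) -> b
--    (like the MinHs term, this diverges when fst x is False)
ex3 :: (Bool, b) -> b
ex3 x = if fst x then snd x else ex3 x

-- 4. recfun f x = if (fst x) then (InL x) else (InR x)
ex4 :: (Bool, b) -> Either (Bool, b) (Bool, b)
ex4 x = if fst x then Left x else Right x
```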

1.2. Inference Algorithm

In the lecture, we introduced two sets of typing rules for an implicitly typed variant of MinHs. Explain the difference between these rules.

The first set of rules doesn't specify an algorithm: several rules require us to guess what types to put in the context, and others (forall introduction and elimination) can be applied at any time.

The second set of rules fixes that problem by allowing unification variables (also known as flexible type variables) to occur inside types, standing for types that are not yet known. During type checking, whenever we need two types to be equal, we extend the context with constraints on the unification variables so that the types become the same. As type checking completes, we obtain a context that, when applied as a substitution to the types, produces the most general type for the given term.
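The heart of this process is first-order unification. The following is a minimal sketch, not the lecture's exact formulation: the Ty datatype and the Subst representation (a map from unification-variable names to types) are our own assumptions, but the algorithm is the standard one, including the occurs check.

```haskell
import qualified Data.Map as M

-- A tiny type language with unification variables (sketch).
data Ty = TVar String | TInt | TBool | Fun Ty Ty | Prod Ty Ty
  deriving (Eq, Show)

-- Constraints on unification variables, as an accumulated substitution.
type Subst = M.Map String Ty

-- Apply a substitution, chasing chains of bound variables.
apply :: Subst -> Ty -> Ty
apply s (TVar v)   = maybe (TVar v) (apply s) (M.lookup v s)
apply s (Fun a b)  = Fun (apply s a) (apply s b)
apply s (Prod a b) = Prod (apply s a) (apply s b)
apply _ t          = t

-- Make two types equal by extending the substitution, or fail.
unify :: Ty -> Ty -> Subst -> Maybe Subst
unify a b s = go (apply s a) (apply s b)
  where
    go (TVar v) t = bind v t
    go t (TVar v) = bind v t
    go (Fun a1 b1)  (Fun a2 b2)  = unify a1 a2 s >>= unify b1 b2
    go (Prod a1 b1) (Prod a2 b2) = unify a1 a2 s >>= unify b1 b2
    go t1 t2 = if t1 == t2 then Just s else Nothing

    bind v t
      | t == TVar v = Just s          -- already identical
      | occurs v t  = Nothing         -- occurs check: reject infinite types
      | otherwise   = Just (M.insert v t s)

    occurs v (TVar w)   = v == w
    occurs v (Fun a b)  = occurs v a || occurs v b
    occurs v (Prod a b) = occurs v a || occurs v b
    occurs _ _          = False
```

For example, unifying a × Int with Bool × b yields the substitution [a := Bool, b := Int].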

2. Existential Types

Existential types are often over-used in languages that support them. For each of the following existential types, propose a non-existential type that could be used instead:

  1. \(\exists t.\ t \times (t \rightarrow \texttt{String})\)
  2. \(\exists t.\ t \times (\texttt{Int} \times t \rightarrow t)\)
  3. \(\exists t.\ (t \rightarrow t \times t) \times (t \times t \rightarrow t)\)
Answers:

  1. \(\texttt{String}\), since the only observation a consumer can make is to produce a single String from the packaged value.
  2. \(\mathbf{1}\), since there is no way to extract any information from the data type.
  3. \(\mathbf{1}\), since there is no way to extract any information, or indeed to construct a value of the abstract data type in the first place.
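The first case can be sketched in Haskell with the ExistentialQuantification extension; the names Pack1, observe, toString and fromString are illustrative, not part of MinHs:

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- The existential pack  exists t. t * (t -> String)  as a Haskell datatype.
data Pack1 = forall t. Pack1 t (t -> String)

-- The only observation a client can make: apply the function to the value.
observe :: Pack1 -> String
observe (Pack1 x f) = f x

-- Hence Pack1 carries no more information than a String:
toString :: Pack1 -> String
toString = observe

fromString :: String -> Pack1
fromString s = Pack1 s id
```

The round trip `observe . fromString` is the identity, which is why plain `String` suffices as the non-existential replacement.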

3. Monadic programming

Let's use the list monad! Here's a fun puzzle that you can think about for a while if you want.

  1    2    3    4    5
  🪦   🪦   🪦   🪦   🪦 

A mole is disturbing a row of five graves. The mole is always
underneath one of the graves.

1. Each day, we may attempt to catch the mole by
   digging up a grave. If we found the mole, we win;
   otherwise, we put the grave back in order and go to sleep.
2. Each night, the mole must move from its current position to an
   adjacent grave.

Find a sequence of digs that is guaranteed to find the mole.
  1. Write a function move_mole : Int -> [Int] such that, if the mole is at grave n initially, move_mole n is the list of graves the mole might be at after its nightly move.

    move_mole :: Int -> [Int]
    move_mole 1 = [2]
    move_mole 5 = [4]
    move_mole n = [n-1,n+1]
    
  2. Write a function kill_mole : Int -> Int -> [Int]. If we dig at grave d, and the mole is at position m, kill_mole d m should return the empty list if we found the mole, and [m] if the mole is still at large.

    kill_mole :: Int -> Int -> [Int]
    kill_mole n m = if n == m then [] else [m]
    
  3. Let's define the list monad! Write Haskell functions that inhabit the following type signatures, and think about whether they satisfy the monad laws:

    myReturn :: a -> [a]
    myBind   :: [a] -> (a -> [b]) -> [b]
    
    myReturn x = [x]
    myBind xs f = concat (map f xs)
    

    We can check the monad laws algebraically as follows (doing this in the tute might be overkill):

    return x >>= f =
    concat (map f [x]) =
    concat [f x] =
    f x
    
    xs >>= return =
    concat (map return xs) =
    xs
    
    xs >>= (\x -> f x >>= g) =
    concat (map (\x -> f x >>= g) xs) =
    concat (map (\x -> concat (map g (f x))) xs) =
    concat (map (concat . map g . f) xs) =
    concat (map concat (map (map g . f) xs)) =
    concat (concat (map (map g . f) xs)) =
    concat (concat (map (map g) (map f xs))) =
    concat (map g (concat (map f xs))) =
    concat (map g (xs >>= f)) =
    (xs >>= f) >>= g
    

    The last derivation here uses the free theorems of map and concat, eta-equivalence, and the fact that concat (concat l) = concat (map concat l).
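    As a sanity check, the laws can also be spot-checked on concrete inputs (the sample values below are arbitrary; this is a test, not a proof):

```haskell
myReturn :: a -> [a]
myReturn x = [x]

myBind :: [a] -> (a -> [b]) -> [b]
myBind xs f = concat (map f xs)

-- Left identity:   return x >>= f  ==  f x
leftId :: Bool
leftId = myBind (myReturn 3) f == f 3
  where f x = [x, x + 1]

-- Right identity:  xs >>= return  ==  xs
rightId :: Bool
rightId = myBind [1, 2, 3] myReturn == [1, 2, 3]

-- Associativity:   (xs >>= f) >>= g  ==  xs >>= (\x -> f x >>= g)
assoc :: Bool
assoc = myBind (myBind xs f) g == myBind xs (\x -> myBind (f x) g)
  where xs = [1, 2]
        f x = [x, x * 10]
        g x = [x + 1]
```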

  4. Here's an implementation of a move function for following a sequence of moves. If the mole is initially at position m, and we perform the sequence of digs xs, then move xs m is the list of final positions the mole could be at. (The final result may contain duplicates but we don't care.) Can you use the list monad (either the one you just defined, or the built-in one in Haskell) to simplify it?

    move :: [Int] -> Int -> [Int]
    move [] m = [m]
    move (x:xs) m =
      let ys = kill_mole x m
          zs = concat (map move_mole ys)
      in
        concat (map (move xs) zs)
    
    move :: [Int] -> Int -> [Int]
    move (n:ns) pos =
      do
        pos' <- kill_mole n pos
        pos'' <- move_mole pos'
        move ns pos''
    move [] pos = return pos
    
  5. Using do notation and move, write a function play : [Int] -> [Int] that returns the possible final locations of the mole after we've performed a sequence of moves, regardless of the mole's initial position.

    play :: [Int] -> [Int]
    play xs =
      do
        pos <- [1,2,3,4,5]
        move xs pos
    
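Putting the pieces together: null (play ds) holds exactly when the dig sequence ds catches the mole no matter where it starts. One classic answer to the puzzle is the sequence 2, 3, 4, 2, 3, 4. The sketch below collects the functions from this section into one runnable program and checks that sequence:

```haskell
move_mole :: Int -> [Int]
move_mole 1 = [2]
move_mole 5 = [4]
move_mole n = [n - 1, n + 1]

kill_mole :: Int -> Int -> [Int]
kill_mole d m = if d == m then [] else [m]

-- Possible final positions after the dig sequence, given a start position.
move :: [Int] -> Int -> [Int]
move []     pos = return pos
move (n:ns) pos = do
  pos'  <- kill_mole n pos
  pos'' <- move_mole pos'
  move ns pos''

-- Possible final positions over all starting positions.
play :: [Int] -> [Int]
play ds = do
  pos <- [1 .. 5]
  move ds pos

-- An empty result means every starting position leads to capture.
winning :: Bool
winning = null (play [2, 3, 4, 2, 3, 4])
```

Tracing it by hand: after digging 2 the survivors {1,3,4,5} move to {2,3,4,5}; digging 3 leaves {2,4,5} moving to {1,3,4,5}; digging 4 leaves {1,3,5} moving to {2,4}; then digs 2, 3, 4 shrink the set to {3,5}, {4}, and finally the empty set.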

